intellectual property
Disney accuses ByteDance of 'virtual smash-and-grab' when using copyrighted works to train its AI
Though ByteDance released Seedance 2.0 only on Thursday, its generative-AI capabilities have already earned praise, but also indignation from Hollywood studios. With that strong early momentum, Seedance has already found itself in hot water with one of the largest media companies in the world. This is not the first time Disney has threatened legal action against an AI company: Character.AI received a cease-and-desist letter for the same offense in September. On the other hand, Disney partnered with OpenAI in a three-year licensing agreement that allows the AI giant to generate images and videos using that highly sought-after intellectual property.
The Fight on Capitol Hill to Make It Easier to Fix Your Car
As vehicles grow more software-dependent, repairing them has become harder than ever. A bill in the US House called the Repair Act would ease those restrictions, but it comes with caveats. Every time you get behind the wheel, your car is collecting data about you: where you go, how fast you're driving, how hard you brake, and even how much you weigh. Yet that data is typically not available to the vehicle owner.
- North America > United States > New York (0.05)
- South America > Venezuela (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- (2 more...)
- Law > Statutes (1.00)
- Automobiles & Trucks (1.00)
- Information Technology > Security & Privacy (0.70)
- Government > Regional Government > North America Government > United States Government (0.47)
User Negotiations of Authenticity, Ownership, and Governance on AI-Generated Video Platforms: Evidence from Sora
Shen, Bohui, Bhatta, Shrikar, Ireebanije, Alex, Liu, Zexuan, Choudhry, Abhinav, Gumusel, Ece, Zhou, Kyrie Zhixuan
As AI-generated video platforms rapidly advance, ethical challenges such as copyright infringement emerge. This study examines how users make sense of AI-generated videos on OpenAI's Sora by conducting a qualitative content analysis of user comments. Through a thematic analysis, we identified four dynamics that characterize how users negotiate authenticity, authorship, and platform governance on Sora. First, users acted as critical evaluators of realism, assessing micro-details such as lighting, shadows, fluid motion, and physics to judge whether AI-generated scenes could plausibly exist. Second, users increasingly shifted from passive viewers to active creators, expressing curiosity about prompts, techniques, and creative processes. Text prompts were perceived as intellectual property, generating concerns about plagiarism and remixing norms. Third, users reported blurred boundaries between real and synthetic media, worried about misinformation, and even questioned the authenticity of other commenters, suspecting bot-generated engagement. Fourth, users contested platform governance: some perceived moderation as inconsistent or opaque, while others shared tactics for evading prompt censorship through misspellings, alternative phrasing, emojis, or other languages. Despite this, many users also enforced ethical norms by discouraging the misuse of real people's images or disrespectful content. Together, these patterns highlighted how AI-mediated platforms complicate notions of reality, creativity, and rule-making in emerging digital ecosystems. Based on the findings, we discuss governance challenges in Sora and how user negotiations inform future platform governance.
- North America > United States > Texas (0.04)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- Asia > Middle East > Jordan (0.04)
- Africa > Nigeria (0.04)
- Media (1.00)
- Law > Intellectual Property & Technology Law (1.00)
Enhancing Model Privacy in Federated Learning with Random Masking and Quantization
Xu, Zhibo, Zhu, Jianhao, Xu, Jingwen, Lv, Changze, Huang, Zisu, Wang, Xiaohua, Wu, Muling, Qian, Qi, Zheng, Xiaoqing, Huang, Xuanjing
The primary goal of traditional federated learning is to protect data privacy by enabling distributed edge devices to collaboratively train a shared global model while keeping raw data decentralized at local clients. The rise of large language models (LLMs) has introduced new challenges in distributed systems, as their substantial computational requirements and the need for specialized expertise raise critical concerns about protecting intellectual property (IP). This highlights the need for a federated learning approach that can safeguard both sensitive data and proprietary models. To tackle this challenge, we propose FedQSN, a federated learning approach that leverages random masking to obscure a subnetwork of model parameters and applies quantization to the remaining parameters. Consequently, the server transmits only a privacy-preserving proxy of the global model to clients during each communication round, thus enhancing the model's confidentiality. Experimental results across various models and tasks demonstrate that our approach not only maintains strong model performance in federated learning settings but also achieves enhanced protection of model parameters compared to baseline methods.
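The abstract above describes the core mechanism only at a high level: randomly mask a subnetwork of parameters and quantize the rest, so the server ships a lossy proxy rather than the full-precision global model. The following is a minimal NumPy sketch of that general idea, not the authors' FedQSN implementation; the function name, mask ratio, and uniform-quantization scheme are my own simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_proxy(params, mask_ratio=0.3, n_bits=4):
    """Build a privacy-preserving proxy of a parameter vector:
    randomly mask a subnetwork (zero it out) and coarsely
    quantize the remaining values to 2**n_bits levels."""
    mask = rng.random(params.shape) < mask_ratio  # subnetwork to hide
    kept = np.where(mask, 0.0, params)

    # uniform quantization of the surviving parameters
    lo, hi = kept.min(), kept.max()
    scale = (hi - lo) / (2**n_bits - 1)
    if scale == 0:
        scale = 1.0
    levels = np.round((kept - lo) / scale)
    return levels * scale + lo

weights = rng.normal(size=1000)
proxy = make_proxy(weights)

# The proxy is lossy by construction: masked entries are gone
# and the rest collapse onto at most 16 distinct values.
print("distinct values in proxy:", len(np.unique(proxy)))
```

Clients would train against this proxy each round, so the exact global parameters never leave the server; the paper's contribution is showing this can be done while keeping model performance close to the unprotected baseline.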
- Europe > Spain > Andalusia > Málaga Province > Málaga (0.04)
- Europe > Romania > Sud - Muntenia Development Region > Giurgiu County > Giurgiu (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
Scott Farquhar thinks Australia should let AI train for free on creative content. He overlooks one key point
Farquhar, the Tech Council of Australia CEO, told ABC's 7.30 program on Tuesday: "all AI usage of mining or searching or going across data is probably illegal under Australian law and I think that hurts a lot of investment of these companies in Australia". Farquhar's claim overlooks that this is not a settled issue in the US, and could have devastating effects on creative industries. Farquhar's argument is that it is not theft of people's work unless the AI is used to "copy an artist directly", such as creating a song in their style. "I do think people would say that, hey, if people are going to sit down with a digital companion, an AI song creator and they collaboratively work with an AI to create something new to the world, that's probably fair use." Farquhar said the benefits of large language models outweigh the issues raised by AI models training on other people's work for free.
- Oceania > Australia (1.00)
- North America > United States (0.75)
Use of AI could worsen racism and sexism in Australia, human rights commissioner warns
AI risks entrenching racism and sexism in Australia, the human rights commissioner has warned, amid internal Labor debate about how to respond to the emerging technology. Lorraine Finlay says the pursuit of productivity gains from AI should not come at the expense of discrimination if the technology is not properly regulated. Finlay's comments follow Labor senator Michelle Ananda-Rajah breaking ranks to call for all Australian data to be "freed" to tech companies to prevent AI perpetuating overseas biases and to reflect Australian life and culture. Ananda-Rajah is opposed to a dedicated AI act but believes content creators should be paid for their work. Media and arts groups have warned of "rampant theft" of intellectual property if big tech companies can take their content to train AI models.
- Law > Civil Rights & Constitutional Law (1.00)
- Government (1.00)
Watermarking Kolmogorov-Arnold Networks for Emerging Networked Applications via Activation Perturbation
Lu, Chia-Hsun, Wu, Guan-Jhih, Ho, Ya-Chi, Shen, Chih-Ya
With the increasing importance of protecting intellectual property in machine learning, watermarking techniques have gained significant attention. As advanced models are increasingly deployed in domains such as social network analysis, the need for robust model protection becomes even more critical. While existing watermarking methods have demonstrated effectiveness for conventional deep neural networks, they often fail to adapt to the novel architecture, Kolmogorov-Arnold Networks (KAN), which feature learnable activation functions. KAN holds strong potential for modeling complex relationships in network-structured data. However, their unique design also introduces new challenges for watermarking. Therefore, we propose a novel watermarking method, Discrete Cosine Transform-based Activation Watermarking (DCT-AW), tailored for KAN. Leveraging the learnable activation functions of KAN, our method embeds watermarks by perturbing activation outputs using discrete cosine transform, ensuring compatibility with diverse tasks and achieving task independence. Experimental results demonstrate that DCT-AW has a small impact on model performance and provides superior robustness against various watermark removal attacks, including fine-tuning, pruning, and retraining after pruning.
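The core mechanism the abstract names, writing watermark bits into DCT coefficients of activation outputs, can be illustrated with a small self-contained sketch. This is not the paper's DCT-AW algorithm (which ties the perturbation to KAN's learnable activation functions); it is a toy version of the mechanism, and every function name here is my own.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; its transpose is the inverse DCT."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0] /= np.sqrt(2.0)
    return c

def embed_watermark(activations, bits, strength=0.05):
    """Perturb low-order DCT coefficients of an activation vector:
    +strength encodes a 1 bit, -strength encodes a 0 bit."""
    n = len(activations)
    C = dct_matrix(n)
    coeffs = C @ activations
    slots = np.arange(1, 1 + len(bits))      # coefficients carrying bits
    coeffs[slots] += strength * (2 * np.array(bits) - 1)
    return C.T @ coeffs                      # back to activation space

def read_watermark(original, marked, n_bits):
    """Recover bits by comparing DCT coefficients before and after."""
    C = dct_matrix(len(original))
    delta = C @ marked - C @ original
    return (delta[1:1 + n_bits] > 0).astype(int).tolist()

rng = np.random.default_rng(1)
acts = rng.normal(size=64)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(acts, bits)
print(read_watermark(acts, marked, len(bits)))  # → [1, 0, 1, 1, 0, 0, 1, 0]
```

Because the DCT matrix is orthonormal, the perturbation is small and spread across the whole activation vector, which is why such schemes can survive fine-tuning and pruning better than marks written into individual values.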
Principle-Guided Verilog Optimization: IP-Safe Knowledge Transfer via Local-Cloud Collaboration
Wang, Jing, Li, Zheng, Li, Lei, He, Fan, Lin, Liyu, Lai, Yao, Li, Yan, Zeng, Xiaoyang, Guo, Yufeng
Recent years have witnessed growing interest in adopting large language models (LLMs) for Register Transfer Level (RTL) code optimization. While powerful cloud-based LLMs offer superior optimization capabilities, they pose unacceptable intellectual property (IP) leakage risks when processing proprietary hardware designs. In this paper, we propose a new scenario where Verilog code must be optimized for specific attributes without leaking sensitive IP information. We introduce the first IP-preserving edge-cloud collaborative framework that leverages the benefits of both paradigms. Our approach employs local small LLMs (e.g., Qwen-2.5-Coder-7B) to perform secure comparative analysis between paired high-quality target designs and novice draft codes, yielding general design principles that summarize key insights for improvements. These principles are then used to query stronger cloud LLMs (e.g., Deepseek-V3) for targeted code improvement, ensuring that only abstracted and IP-safe guidance reaches external services. Our experimental results demonstrate that the framework achieves significantly higher optimization success rates compared to baseline methods. For example, combining Qwen-2.5-Coder-7B and Deepseek-V3 achieves a 66.67% optimization success rate for power utilization, outperforming Deepseek-V3 alone (49.81%) and even commercial models like GPT-4o (55.81%). Further investigation of local and cloud LLM combinations reveals that different model pairings exhibit varying strengths for specific optimization objectives, with interesting trends emerging when varying the number of comparative code pairs. Our work establishes a new paradigm for secure hardware design optimization that balances performance gains with IP protection.
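The two-stage flow the abstract describes can be sketched with stub functions standing in for the local and cloud models. Everything here is hypothetical: the stubs return canned strings, and the split of what is sent to the cloud (the draft plus abstracted principles, never the proprietary reference design) is my reading of the abstract, not the paper's exact protocol. The point of the sketch is only the IP-safety property: the proprietary RTL stays on the local side.

```python
def local_llm_extract_principles(target_rtl, draft_rtl):
    """Stand-in for the local small LLM (e.g. Qwen-2.5-Coder-7B):
    compares a proprietary high-quality design with a novice draft
    and returns abstract design principles containing no RTL."""
    # A real system would prompt the model; we return a canned insight.
    return "Principle: gate clocks on idle datapath registers to cut power."

def cloud_llm_improve(draft_rtl, principles):
    """Stand-in for the cloud LLM (e.g. Deepseek-V3): receives only
    the draft to optimize plus the IP-safe guidance."""
    return draft_rtl + "\n// applied: " + principles

proprietary_target = "module secret_ip(...); /* proprietary RTL */ endmodule"
novice_draft = "module draft(input clk, output reg q); endmodule"

principles = local_llm_extract_principles(proprietary_target, novice_draft)
improved = cloud_llm_improve(novice_draft, principles)

# The proprietary target design never reaches the external service.
assert "secret_ip" not in principles and "secret_ip" not in improved
print(improved)
```

The framework's claim is that this abstraction step loses little: guidance distilled locally is enough for the stronger cloud model to beat both the local model alone and the cloud model alone on optimization success rate.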
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
- Asia > China > Jiangsu Province > Nanjing (0.04)
- Asia > China > Hong Kong (0.04)
- Semiconductors & Electronics (1.00)
- Information Technology > Security & Privacy (1.00)
Arts and media groups demand Labor take a stand against 'rampant theft' of Australian content to train AI
Arts, creative and media groups have demanded the government rule out allowing big tech companies to take Australian content to train their artificial intelligence models, with concerns such a shift would "sell out" Australian workers and lead to "rampant theft" of intellectual property. "It is not appropriate for big tech to steal the work of Australian artists, musicians, creators, news media, journalism, and use it for their own ends without paying for it," Ley said on Wednesday. In an interim report on "harnessing data and digital technology", the Productivity Commission set out proposals for how tech, including AI, could be regulated and treated in Australia, suggesting it could boost productivity by between 0.5% and 13% over the next decade, adding up to $116bn to Australia's GDP. The commission suggested several possible remedies, including expanding licensing schemes, or an exemption for "text and data mining" and expanding the existing fair dealing rules, which it said existed in other countries. The latter suggestion prompted fierce pushback from arts, creative and media companies, which raised alarm their work could be left open for massively wealthy tech companies to use – without compensation or payment – to train AI models.
US can't cut China off completely, but must defend AI and American innovation from nonstop theft: Sen Rounds
Republican South Dakota Sen. Mike Rounds spoke to Fox News Digital about the U.S.' efforts to remain the world's economic and military leader as China makes technological advances. SIMI VALLEY, CALIFORNIA – China's rampant theft of intellectual property from American institutions and industry must be thwarted as the U.S. battles to remain the world's economic and military leader – but America cannot completely decouple from the economic behemoth, Republican South Dakota Sen. Mike Rounds told Fox News Digital in an exclusive interview. The key, according to Rounds, is maintaining China as a key trade partner without giving them access to America's technology, including artificial intelligence and computer chips. "In doing so, maybe we'll restrict their ability to actually be able to have a market that they can create their own stuff with. And in using ours, they'll be our standards," Rounds told Fox News Digital in an exclusive interview from the Ronald Reagan Presidential Library during the Reagan National Economic Forum on Friday in California.
- North America > United States > South Dakota (0.46)
- North America > United States > California > Ventura County > Simi Valley (0.25)
- North America > United States > District of Columbia > Washington (0.05)
- (4 more...)